This work provides a deep reinforcement learning approach to solving a periodic-review inventory control system with stochastic vendor lead times, lost sales, correlated demand, and price matching. While this dynamic program has historically been considered intractable, our results show that several policy-learning approaches are competitive with, or outperform, classical methods. To train these algorithms, we develop novel techniques for converting historical data into a simulator. On the theoretical side, we present learnability results for a subclass of inventory control problems, providing a provable reduction of the reinforcement learning problem to supervised learning. On the algorithmic side, we present a model-based reinforcement learning procedure (Direct Backprop) that solves the periodic-review inventory control problem by constructing a differentiable simulator. Under a variety of metrics, Direct Backprop outperforms model-free RL and newsvendor baselines, in both simulations and real-world deployments.
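The core idea behind Direct Backprop, descending the gradient of a simulator's cost with respect to the policy parameters, can be sketched in miniature. The lost-sales simulator, cost structure, and single base-stock parameter below are illustrative assumptions, not the paper's actual model, and a central finite-difference gradient stands in for true backpropagation through a differentiable simulator:

```python
import random

def simulate_cost(base_stock, demands, holding_cost=1.0, penalty=9.0):
    """Roll out an order-up-to policy through a toy lost-sales inventory
    simulator (zero lead time) and return the total cost."""
    inventory = 0.0
    total = 0.0
    for d in demands:
        order = max(0.0, base_stock - inventory)  # order up to base_stock
        inventory += order
        sold = min(inventory, d)
        inventory -= sold                          # leftover incurs holding cost
        total += holding_cost * inventory + penalty * (d - sold)  # unmet demand is lost
    return total

def optimize(demands, s=0.0, lr=0.005, eps=1e-3, steps=300):
    """Gradient descent on the policy parameter; the simulator's gradient is
    estimated by central finite differences as a stand-in for autodiff."""
    for _ in range(steps):
        g = (simulate_cost(s + eps, demands)
             - simulate_cost(s - eps, demands)) / (2 * eps)
        s -= lr * g
    return s

random.seed(0)
demands = [random.uniform(0, 10) for _ in range(200)]
s_star = optimize(demands)
```

With a 9:1 penalty-to-holding ratio, the learned base-stock level approaches the 0.9 demand quantile (about 9 here), matching the classical newsvendor solution; a real implementation would build the simulator from historical data and differentiate it with an autodiff framework.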
Multi-horizon probabilistic time-series forecasting has wide applicability to real-world tasks such as demand forecasting. Recent work on neural time-series forecasting mainly focuses on the use of Seq2Seq architectures. For example, MQTransformer, an improvement of MQCNN, has shown state-of-the-art performance in probabilistic demand forecasting. In this paper, we consider improving model performance by adding a cross-entity attention mechanism, along with a retrieval mechanism to select which entities to attend to. We demonstrate how our new neural architecture, MQRetNN, leverages the encoded contexts of a baseline model across the entire population to improve forecasting accuracy. Using MQCNN as the baseline model (due to computational constraints, we do not use MQTransformer), we first show on a smaller demand forecasting dataset that adding a cross-entity attention mechanism, where each entity attends to all other entities in the population, can improve test loss by about 3%. We then evaluate the model with our proposed retrieval mechanism on a large-scale demand forecasting application with over 2 million products and observe a performance gain of about 1% over the MQCNN baseline.
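The two mechanisms the abstract describes, retrieval to select which entities to attend to and attention over the retrieved encodings, can be sketched as follows. The vector representation and top-k dot-product retrieval are illustrative assumptions, not MQRetNN's actual architecture:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def cross_entity_attention(query, pool, k=2):
    """Attend from one entity's encoding over a retrieved subset of the
    population's encodings (top-k by scaled dot-product similarity),
    returning a context vector to combine with the entity's own features."""
    scores = [dot(query, p) / math.sqrt(len(query)) for p in pool]
    top = sorted(range(len(pool)), key=lambda i: -scores[i])[:k]  # retrieval step
    weights = softmax([scores[i] for i in top])                   # attention step
    return [sum(w * pool[i][j] for w, i in zip(weights, top))
            for j in range(len(query))]
```

The retrieval step keeps attention affordable at population scale (millions of entities) by restricting each entity to a small candidate set rather than the full population.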
This paper studies sample-efficient reinforcement learning (RL) in the setting where only the optimal value function is assumed to be linearly realizable. It has recently been understood that, even under this seemingly strong assumption and with access to a generative model, the worst-case sample complexity can be prohibitively (i.e., exponentially) large. We study the setting in which the learner additionally has access to interactive demonstrations from an expert policy, and we propose a statistically and computationally efficient algorithm (Delphi) for blending exploration with expert queries. In particular, Delphi requires $\tilde{\mathcal{O}}(d)$ expert queries and $\texttt{poly}(d, H, |\mathcal{A}|, 1/\varepsilon)$ exploratory samples to provably recover an $\varepsilon$-suboptimal policy. Compared to pure RL approaches, this corresponds to an exponential improvement in sample complexity with surprisingly little expert input. Compared to prior imitation learning (IL) approaches, the number of expert demonstrations we require is independent of $H$ and logarithmic in $1/\varepsilon$, whereas all prior work requires at least linear factors of both, in addition to the same dependence on $d$. To establish that the number of expert queries needed is near-minimal, we show that, in the same setting, any learner whose exploration budget is polynomially bounded (in terms of $d$, $H$, and $|\mathcal{A}|$) requires at least $\tilde{\Omega}(\sqrt{d})$ oracle calls to recover a policy competitive with the expert's value function. Under the weaker assumption that the expert's policy is linear, we show that the lower bound increases to $\tilde{\Omega}(d)$.
A central obstacle in the objective evaluation of treatment effect (TE) estimators in randomized controlled trials (RCTs) is the lack of ground truth (or a validation set) against which to test their performance. In this paper, we provide a novel cross-validation-like methodology to address this challenge. The key insight of our procedure is that noisy (but unbiased) difference-in-means estimates can be used as ground-truth "labels" on one portion of an RCT to test the performance of an estimator trained on the other portion. We combine this insight with an aggregation scheme that borrows statistical strength across a large collection of RCTs, yielding an end-to-end methodology for judging an estimator's ability to recover the underlying treatment effect. We evaluate our methodology across 709 RCTs implemented in the Amazon supply chain. In Amazon's A/B tests, we highlight a unique difficulty associated with recovering the treatment effect due to the heavy-tailed nature of the response variables. In this heavy-tailed setting, our methodology suggests that procedures that aggressively downweight or truncate large values, while introducing bias, lower the variance enough to ensure more accurate estimation of the treatment effect.
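The bias-variance trade-off in the last sentence can be checked with a toy Monte Carlo: capping (winsorizing) the upper tail of a heavy-tailed outcome biases the difference-in-means estimate but shrinks its variance. The lognormal outcomes, effect size, and cap quantile below are illustrative choices, not Amazon's data:

```python
import random

def winsorize(xs, q=0.95):
    """Cap values above the empirical q-quantile: introduces bias,
    reduces variance."""
    cap = sorted(xs)[int(q * len(xs)) - 1]
    return [min(x, cap) for x in xs]

def diff_in_means(treat, ctrl):
    return sum(treat) / len(treat) - sum(ctrl) / len(ctrl)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(1)
plain, capped = [], []
for _ in range(300):  # replicate a small two-arm experiment many times
    ctrl = [random.lognormvariate(0.0, 1.5) for _ in range(200)]
    treat = [random.lognormvariate(0.0, 1.5) + 0.2 for _ in range(200)]
    plain.append(diff_in_means(treat, ctrl))
    capped.append(diff_in_means(winsorize(treat), winsorize(ctrl)))
```

Across replications, the winsorized estimator's variance comes out substantially below the plain estimator's, which is the effect the abstract attributes to truncation procedures in heavy-tailed A/B tests.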
Reinforcement learning can enable robots to navigate to distant goals while optimizing user-specified reward functions, including preferences for following lanes, staying on paved paths, or avoiding freshly mowed grass. However, online learning from trial and error for real-world robots is logistically challenging, and methods that can instead utilize existing datasets of robotic navigation data could be significantly more scalable and enable broader generalization. In this paper, we present ReViND, the first offline RL system for robotic navigation that can leverage previously collected data to optimize user-specified reward functions in the real world. We evaluate our system for off-road navigation without any additional data collection or fine-tuning, and show that it can navigate to distant goals using only offline training on previously collected data, and that it exhibits behaviors that qualitatively differ based on the user-specified reward function.
We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g. human videos, to accelerate reinforcement learning (RL) in a different target domain, e.g. a robotic manipulator in a simulated kitchen. Instead of imitating low-level actions like joint velocities, our approach imitates the sequence of demonstrated semantic skills like "opening the microwave" or "turning on the stove". This allows us to transfer demonstrations across environments (e.g. real-world to simulated kitchen) and agent embodiments (e.g. bimanual human demonstration to robotic arm). We evaluate on three challenging cross-domain learning problems and match the performance of demonstration-accelerated RL approaches that require in-domain demonstrations. In a simulated kitchen environment, our approach learns long-horizon robot manipulation tasks, using less than 3 minutes of human video demonstrations from a real-world kitchen. This enables scaling robot learning via the reuse of demonstrations, e.g. collected as human videos, for learning in any number of target domains.
Navigation is one of the most heavily studied problems in robotics, and is conventionally approached as a geometric mapping and planning problem. However, real-world navigation presents a complex set of physical challenges that defies simple geometric abstractions. Machine learning offers a promising way to go beyond geometry and conventional planning, allowing for navigational systems that make decisions based on actual prior experience. Such systems can reason about traversability in ways that go beyond geometry, accounting for the physical outcomes of their actions and exploiting patterns in real-world environments. They can also improve as more data is collected, potentially providing a powerful network effect. In this article, we present a general toolkit for experiential learning of robotic navigation skills that unifies several recent approaches, describe the underlying design principles, summarize experimental results from several of our recent papers, and discuss open problems and directions for future work.
Iterative text revision improves text quality by fixing grammatical errors, rephrasing for better readability or contextual appropriateness, or reorganizing sentence structures throughout a document. Most recent research has focused on understanding and classifying different types of edits in the iterative revision process from human-written text, rather than on building accurate and robust systems for iterative text revision. In this work, we aim to build an end-to-end text revision system that can iteratively generate helpful edits by explicitly detecting editable spans (where-to-edit) with their corresponding edit intents and then instructing a revision model to revise the detected edit spans. Leveraging datasets from other related text editing NLP tasks, combined with the specification of editable spans, leads our system to more accurately model the process of iterative text refinement, as evidenced by empirical results and human evaluations. Our system significantly outperforms previous baselines on our text revision tasks and other standard text revision tasks, including grammatical error correction, text simplification, sentence fusion, and style transfer. Through extensive qualitative and quantitative analysis, we establish vital connections between edit intentions and writing quality, and advance the computational modeling of iterative text revision.
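The detect-then-revise loop the abstract describes can be sketched with hard-coded rules standing in for the learned span detector and revision model; the patterns, intents, and fixes below are purely illustrative:

```python
def detect_spans(text):
    """Toy where-to-edit detector: return (start, end, intent) spans.
    A real system would use a learned tagger; a few fixed patterns
    stand in for it here."""
    rules = {"teh": "grammar", "very very": "clarity"}
    spans = []
    for pattern, intent in rules.items():
        start = text.find(pattern)
        while start != -1:
            spans.append((start, start + len(pattern), intent))
            start = text.find(pattern, start + 1)
    return sorted(spans)

def revise(text, spans):
    """Toy revision model: rewrite each detected span according to its intent,
    right to left so earlier offsets stay valid."""
    fixes = {"grammar": {"teh": "the"}, "clarity": {"very very": "very"}}
    for start, end, intent in reversed(spans):
        original = text[start:end]
        text = text[:start] + fixes[intent].get(original, original) + text[end:]
    return text

def iterative_revision(text, max_rounds=3):
    """Alternate detection and revision until no editable spans remain
    or the round budget is spent."""
    for _ in range(max_rounds):
        spans = detect_spans(text)
        if not spans:
            break
        text = revise(text, spans)
    return text
```

The point of the structure is the explicit interface between the two stages: the detector emits spans tagged with edit intents, and the reviser is conditioned on both the span and its intent, mirroring the system's where-to-edit-then-revise decomposition.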
Semantic navigation is necessary to deploy mobile robots in uncontrolled environments like our homes, schools, and hospitals. Many learning-based approaches have been proposed in response to the lack of semantic understanding of the classical pipeline for spatial navigation, which builds a geometric map using depth sensors and plans to reach point goals. Broadly, end-to-end learning approaches reactively map sensor inputs to actions with deep neural networks, while modular learning approaches enrich the classical pipeline with learning-based semantic sensing and exploration. But learned visual navigation policies have predominantly been evaluated in simulation. How well do different classes of methods work on a robot? We present a large-scale empirical study of semantic visual navigation methods comparing representative methods from classical, modular, and end-to-end learning approaches across six homes with no prior experience, maps, or instrumentation. We find that modular learning works well in the real world, attaining a 90% success rate. In contrast, end-to-end learning does not, dropping from a 77% success rate in simulation to 23% in the real world due to a large image domain gap between simulation and reality. For practitioners, we show that modular learning is a reliable approach to navigate to objects: modularity and abstraction in policy design enable Sim-to-Real transfer. For researchers, we identify two key issues that prevent today's simulators from being reliable evaluation benchmarks - (A) a large Sim-to-Real gap in images and (B) a disconnect between simulation and real-world error modes - and propose concrete steps forward.
We consider the problem of embodied visual navigation given an image-goal (ImageNav), where an agent is initialized in an unfamiliar environment and tasked with navigating to a location 'described' by an image. Unlike related navigation tasks, ImageNav does not have a standardized task definition, which makes comparison across methods difficult. Further, existing formulations have two problematic properties: (1) image-goals are sampled from random locations, which can lead to ambiguity (e.g., looking at walls), and (2) image-goals match the camera specification and embodiment of the agent; this rigidity is limiting when considering user-driven downstream applications. We present the Instance-specific ImageNav task (InstanceImageNav) to address these limitations. Specifically, the goal image is 'focused' on some particular object instance in the scene and is taken with camera parameters independent of the agent. We instantiate InstanceImageNav in the Habitat Simulator using scenes from the Habitat-Matterport3D dataset (HM3D) and release a standardized benchmark to measure community progress.